
Provably Robust Metric Learning

Neural Information Processing Systems

Metric learning is an important family of algorithms for classification and similarity search, but the robustness of learned metrics against small adversarial perturbations is less studied. In this paper, we show that existing metric learning algorithms, which focus on boosting the clean accuracy, can result in metrics that are less robust than the Euclidean distance. To overcome this problem, we propose a novel metric learning algorithm to find a Mahalanobis distance that is robust against adversarial perturbations, and the robustness of the resulting model is certifiable. Experimental results show that the proposed metric learning algorithm improves both certified robust errors and empirical robust errors (errors under adversarial attacks). Furthermore, unlike neural network defenses which usually encounter a trade-off between clean and robust errors, our method does not sacrifice clean errors compared with previous metric learning methods.
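The abstract concerns learning a Mahalanobis distance for nearest-neighbor classification. As a minimal illustrative sketch (not the authors' implementation), a Mahalanobis distance parameterized by a symmetric PSD matrix M, plugged into a k-NN classifier, looks like this; the function names are my own:

```python
import numpy as np

def mahalanobis_dist(x, y, M):
    """Mahalanobis distance d_M(x, y) = sqrt((x - y)^T M (x - y)).

    M must be symmetric positive semidefinite; M = I recovers the
    plain Euclidean distance.
    """
    d = x - y
    return float(np.sqrt(d @ M @ d))

def knn_predict(x, X_train, y_train, M, k=1):
    """k-NN prediction under the metric induced by M (illustrative sketch)."""
    dists = [mahalanobis_dist(x, xi, M) for xi in X_train]
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# With M = I this is ordinary Euclidean 1-NN.
X = np.array([[0.0, 0.0], [1.0, 1.0]])
y = np.array([0, 1])
print(knn_predict(np.array([0.1, 0.0]), X, y, np.eye(2)))  # -> 0
```

Metric learning methods replace the identity matrix with a learned M; the paper's point is that the choice of M also determines how large an adversarial perturbation is needed to change the prediction.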


Review for NeurIPS paper: Provably Robust Metric Learning

Neural Information Processing Systems

Summary and Contributions: The paper presents a Mahalanobis metric learning algorithm that is certifiably robust to adversarial attacks. The algorithm learns a Mahalanobis matrix that maximizes the minimal adversarial perturbation needed to attack each example. The method is compared against standard metric learning algorithms on a series of datasets, and the experiments show that the proposed algorithm is indeed robust to attacks, exhibiting the lowest robust error and often also the lowest clean error. To learn the Mahalanobis matrix, the paper defines an objective that establishes a lower bound on the minimal adversarial perturbation of a training instance, parametrized by the Mahalanobis matrix. The bound is based on the minimal perturbation that, given an instance together with a positive and a negative instance, changes the nearest-neighbor relation.
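The pairwise quantity the review describes has a simple geometric reading: for squared Mahalanobis distances, the quadratic terms in the perturbation cancel on the decision boundary between a fixed positive/negative pair, so the minimal l2 perturbation that makes the instance equidistant from both has a closed form. The sketch below works out that geometric idea under my own notation; it is not the paper's exact bound:

```python
import numpy as np

def min_pair_perturbation(x, x_pos, x_neg, M):
    """Minimal l2-norm perturbation of x making x equidistant (in squared
    Mahalanobis distance d_M^2) from x_pos and x_neg.

    Setting d_M^2(x + delta, x_pos) = d_M^2(x + delta, x_neg), the
    delta^T M delta terms cancel, leaving the linear constraint
        2 delta^T M (x_neg - x_pos) = d_M^2(x, x_neg) - d_M^2(x, x_pos),
    whose minimum-norm solution has magnitude |rhs| / (2 ||M (x_neg - x_pos)||).
    Illustrative sketch only, not the bound used in the paper.
    """
    w = M @ (x_neg - x_pos)                  # normal of the bisecting hyperplane
    d2_pos = (x - x_pos) @ M @ (x - x_pos)   # squared distance to the positive
    d2_neg = (x - x_neg) @ M @ (x - x_neg)   # squared distance to the negative
    return abs(d2_neg - d2_pos) / (2 * np.linalg.norm(w))

# Sanity check with M = I: x at the origin, x_pos = (1, 0), x_neg = (3, 0);
# the bisector is the line x1 = 2, so the minimal perturbation is 2.
x = np.zeros(2)
r = min_pair_perturbation(x, np.array([1.0, 0.0]), np.array([3.0, 0.0]), np.eye(2))
print(round(r, 6))  # -> 2.0
```

Minimizing the worst case of such pairwise quantities over training triples is one natural way to read the review's description of the learning objective.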


Review for NeurIPS paper: Provably Robust Metric Learning

Neural Information Processing Systems

This paper proposes a metric learning method for robust kNN inference against adversarial examples.

Advantages:
- First certifiably robust metric learning method.
- In the rebuttal, the authors added a comparison of radius kNN to deep networks and showed good results on MNIST and Fashion-MNIST.

Drawbacks:
- Lacks comparison to previous robust metric learning methods.

This paper received mixed initial scores, which sparked a fruitful discussion phase. A consensus emerged between the reviewers and the AC that the contributions outweigh the execution flaws, and we therefore recommend this work for acceptance. We encourage the authors to add all relevant prior works pointed out by the reviewers, and also to add baselines from robust metric learning.

